With the development of depth sensors in recent years, RGBD object tracking has received significant attention. Compared with traditional RGB object tracking, the additional depth modality can effectively mitigate interference between the target and the background. However, some existing RGBD trackers process the two modalities separately, and thus particularly useful shared information between them is ignored. On the other hand, some methods attempt to fuse the two modalities by treating them equally, resulting in the loss of modality-specific features. To tackle these limitations, we propose a novel Dual-fused Modality-aware Tracker (termed DMTracker), which aims to learn informative and discriminative representations of the target objects for robust RGBD tracking through two fusion modules. The first fusion module focuses on extracting the shared information between modalities based on cross-modal attention. The second aims at integrating the RGB-specific and depth-specific information to enhance the fused features. By fusing both the modality-shared and modality-specific information in a modality-aware scheme, our DMTracker can learn discriminative representations in complex tracking scenes. Experiments show that our proposed tracker achieves very promising results on challenging RGBD benchmarks. Code is available at \url{https://github.com/ShangGaoG/DMTracker}.
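The cross-modal attention idea behind the first fusion module can be pictured as standard attention in which one modality queries the other. The sketch below is a minimal illustration, not DMTracker's implementation: learned Q/K/V projections are omitted, and the token and channel sizes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(rgb_tokens, depth_tokens):
    """One direction of cross-modal attention: RGB features query the
    depth features, so each RGB location aggregates the depth information
    most relevant to it (identity Q/K/V projections for brevity)."""
    d_k = rgb_tokens.shape[-1]
    attn = softmax(rgb_tokens @ depth_tokens.T / np.sqrt(d_k))
    return attn @ depth_tokens

rng = np.random.default_rng(0)
rgb = rng.random((16, 8))    # 16 flattened spatial tokens, 8 channels
depth = rng.random((16, 8))
shared = cross_modal_attention(rgb, depth)
```

A symmetric pass with the roles of the two modalities swapped would yield the depth-queried counterpart, and the two outputs could then be combined into the shared representation.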
Multi-modal tracking has gained attention because it is more accurate and robust in complex scenarios than conventional RGB-based tracking. Its key lies in how to fuse multi-modal data and reduce the gap between modalities. However, multi-modal tracking still suffers severely from data deficiency, which leads to insufficient learning of fusion modules. Instead of building such a fusion module, in this paper we provide a new perspective on multi-modal tracking by attaching importance to multi-modal visual prompts. We design a novel multi-modal prompt tracker (ProTrack), which can transfer multi-modal inputs to a single modality through a prompt paradigm. By best exploiting the tracking ability of pre-trained RGB trackers learned at scale, our ProTrack can achieve high-performance multi-modal tracking simply by changing the inputs, even without any extra training on multi-modal data. Extensive experiments on 5 benchmark datasets demonstrate the effectiveness of the proposed ProTrack.
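One way to picture the prompt paradigm is to fold the auxiliary modality into the RGB input space so that a frozen, pre-trained RGB tracker can consume the multi-modal input unchanged. The sketch below is an illustrative assumption, not ProTrack's exact formulation: the linear blend and the `alpha` weight are placeholders.

```python
import numpy as np

def prompt_fuse(rgb, depth, alpha=0.6):
    """Blend a normalized, 3-channel-replicated depth map into the RGB
    frame, producing a single RGB-shaped input for a pre-trained tracker.
    (Illustrative sketch; `alpha` and the linear blend are assumptions.)"""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    d3 = np.repeat(d[..., None], 3, axis=-1)
    return alpha * rgb + (1.0 - alpha) * d3

frame = np.random.rand(32, 32, 3)   # RGB frame in [0, 1]
dmap = np.random.rand(32, 32)       # raw depth map
prompted = prompt_fuse(frame, dmap)
```

The appeal of this style of fusion is that no new fusion module needs to be trained, which sidesteps the data-deficiency problem the abstract describes.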
Digital cameras convert sensor raw readings into RGB images by means of an image signal processor (ISP). Computational photography tasks such as image denoising and color constancy are commonly performed in the raw domain, in part due to the inherent hardware design, but also because of the appealing simplicity of the noise statistics that result from direct sensor readings. Despite this, the availability of raw images is limited compared with the abundance and diversity of available RGB data. Recent approaches have attempted to bridge this gap by estimating the RGB-to-raw mapping: handcrafted model-based methods, which typically require manual parameter fine-tuning, and end-to-end learned neural networks, which require large amounts of training data, sometimes with complex training procedures, and often lack interpretability and parametric control. To address these existing limitations, we present a novel hybrid model-based and data-driven ISP that builds on canonical ISP operations and is both learnable and interpretable. Our proposed invertible model, capable of bidirectional mapping between the raw and RGB domains, employs end-to-end learning of rich parameter representations, i.e. dictionaries, that are free of direct parametric supervision and additionally enable simple and plausible data augmentation. We demonstrate the value of our data generation process on the tasks of raw image reconstruction and raw image denoising, attaining state-of-the-art performance in both. Additionally, we show that our ISP can learn meaningful mappings from few data samples, and that models trained with our dictionary-based data augmentation are competitive despite having few or zero ground-truth labels.
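To make the idea of an invertible, parametric ISP concrete, here is a toy two-operation pipeline (per-channel white-balance gain followed by gamma compression) that maps raw to RGB and back exactly. The parameter values are illustrative defaults, not the learned dictionaries of the paper.

```python
import numpy as np

class ToyISP:
    """A toy invertible ISP stage: per-channel gain (white balance)
    followed by gamma compression. Forward maps raw -> RGB; the inverse
    maps RGB -> raw. Parameter values are illustrative, not learned."""
    def __init__(self, gains=(1.8, 1.0, 1.5), gamma=2.2):
        self.gains = np.asarray(gains)
        self.gamma = gamma

    def raw_to_rgb(self, raw):
        return (raw * self.gains) ** (1.0 / self.gamma)

    def rgb_to_raw(self, rgb):
        return (rgb ** self.gamma) / self.gains

isp = ToyISP()
raw = np.random.rand(4, 4, 3) * 0.5   # keep gains * raw within [0, 1]
round_trip = isp.rgb_to_raw(isp.raw_to_rgb(raw))
```

Because every stage is a closed-form, parameterized operation, the same structure admits both interpretation (each parameter has a photographic meaning) and learning (parameters can be fitted end-to-end).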
High dynamic range (HDR) imaging is of fundamental importance in modern digital photography pipelines and is used to produce high-quality photographs with well-exposed regions despite varying illumination across the image. This is typically achieved by capturing multiple low dynamic range (LDR) images at different exposures. However, poorly compensated motion leads to artifacts such as ghosting, over-exposed regions, and misalignment errors. In this paper, we present a new HDR imaging technique that explicitly models alignment and exposure uncertainties to produce high-quality HDR results. We introduce a strategy that jointly aligns the frames and assesses alignment and exposure reliability using an HDR-aware, uncertainty-driven attention map, which robustly merges the frames into a single high-quality HDR image. Furthermore, we present a progressive multi-stage image fusion approach that can flexibly merge any number of LDR images in a permutation-invariant manner. Experimental results show that our method yields PSNR improvements of up to 0.8 dB over the state of the art, along with subjective improvements in the form of better detail, better color, and fewer artifacts.
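The attention-driven, permutation-invariant merge can be sketched as a per-pixel softmax over frame reliability scores; the stand-in below is a simplification of the paper's learned, uncertainty-driven attention maps, with shapes and scores chosen for illustration.

```python
import numpy as np

def merge_frames(features, reliability):
    """Merge aligned LDR frames with a per-pixel softmax over reliability
    scores. Both inputs have shape (n_frames, H, W); the weighted sum is
    invariant to the order in which frames are supplied."""
    logits = reliability - reliability.max(axis=0, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=0, keepdims=True)
    return (w * features).sum(axis=0)

rng = np.random.default_rng(1)
frames = rng.random((3, 8, 8))   # three aligned LDR frames
scores = rng.random((3, 8, 8))   # per-pixel reliability per frame
hdr = merge_frames(frames, scores)
perm = [2, 0, 1]
hdr_permuted = merge_frames(frames[perm], scores[perm])
```

Because the softmax weights and the weighted sum both commute with reordering of the frame axis, any number of frames can be merged in any order, which matches the permutation-invariance claim in the abstract.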
The current trend of applying transfer learning from CNNs trained on large datasets can be overkill when the target application is a custom and delimited problem with enough data to train a network from scratch. On the other hand, training custom and lighter CNNs requires expertise in the from-scratch case, and/or high-end resources in the case of hardware-aware neural architecture search (HW NAS), limiting access to the technology for non-habitual NN developers. For this reason, we present ColabNAS, an affordable HW NAS technique for producing lightweight, task-specific CNNs. Its novel derivative-free search strategy, inspired by Occam's razor, allows it to obtain state-of-the-art results on the Visual Wake Word dataset in just 4.5 GPU hours using free online GPU services such as Google Colaboratory and Kaggle Kernel.
In this paper, we propose a novel 3D graph convolution based pipeline for category-level 6D pose and size estimation from monocular RGB-D images. The proposed method leverages an efficient 3D data augmentation and a novel vector-based decoupled rotation representation. Specifically, we first design an orientation-aware autoencoder with 3D graph convolution for latent feature learning. The learned latent feature is insensitive to point shift and size thanks to the shift- and scale-invariance properties of the 3D graph convolution. Then, to efficiently decode the rotation information from the latent feature, we design a novel flexible vector-based decomposable rotation representation that employs two decoders to complementarily access the rotation information. The proposed rotation representation has two major advantages: 1) a decoupled characteristic that makes rotation estimation easier; 2) the flexible length and rotation angle of the vectors allow us to find a more suitable vector representation for a specific pose estimation task. Finally, we propose a 3D deformation mechanism to increase the generalization ability of the pipeline. Extensive experiments show that the proposed pipeline achieves state-of-the-art performance on category-level tasks. Further, the experiments demonstrate that the proposed rotation representation is more suitable for pose estimation tasks than other rotation representations.
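A common way to decode a full rotation from two predicted 3D vectors is Gram-Schmidt orthogonalization, as in the classic 6D rotation representation; the sketch below uses that generic scheme, while the paper's decoupled, variable-length variant differs in detail.

```python
import numpy as np

def rotation_from_vectors(a, b):
    """Decode a rotation matrix from two predicted 3D vectors via
    Gram-Schmidt: normalize the first vector, project it out of the
    second, and complete the frame with a cross product. This tolerates
    non-orthogonal, non-unit predictions from the two decoders."""
    r1 = a / np.linalg.norm(a)
    b_perp = b - (b @ r1) * r1
    r2 = b_perp / np.linalg.norm(b_perp)
    r3 = np.cross(r1, r2)
    return np.stack([r1, r2, r3], axis=1)

# Noisy, non-orthogonal predictions still yield a valid rotation.
R = rotation_from_vectors(np.array([1.0, 0.1, 0.0]),
                          np.array([0.0, 1.0, 0.2]))
```

Representations of this family avoid the discontinuities of quaternions and Euler angles, which is one reason vector-based decodings tend to be easier for networks to regress.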
Image quality assessment (IQA) forms a natural and often straightforward undertaking for humans, yet effective automation of the task remains highly challenging. Recent metrics from the deep learning community commonly compare image pairs during training to improve upon traditional metrics such as PSNR or SSIM. However, current comparisons ignore the fact that image content affects quality assessment, as comparisons only occur between images of similar content. This restricts the diversity and number of image pairs that the model is exposed to during training. In this paper, we strive to enrich these comparisons with content diversity. Firstly, we relax comparison constraints, and compare pairs of images with differing content. This increases the variety of available comparisons. Secondly, we introduce listwise comparisons to provide a holistic view to the model. By including differentiable regularizers, derived from correlation coefficients, models can better adjust predicted scores relative to one another. Evaluation on multiple benchmarks, covering a wide range of distortions and image content, shows the effectiveness of our learning scheme for training image quality assessment models.
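A differentiable regularizer derived from the Pearson linear correlation coefficient can take the following form; this is a generic sketch of the idea, not the paper's exact loss.

```python
import numpy as np

def plcc_regularizer(pred, target, eps=1e-8):
    """1 - Pearson linear correlation over a list of quality scores.
    Minimizing it pushes the predicted scores to co-vary consistently
    with the targets across the whole list, not just within pairs."""
    p = pred - pred.mean()
    t = target - target.mean()
    plcc = (p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t) + eps)
    return 1.0 - plcc

# Perfectly correlated predictions incur (near-)zero penalty.
loss = plcc_regularizer(np.array([1.0, 2.0, 3.0]),
                        np.array([2.0, 4.0, 6.0]))
```

Because the statistic is computed over an entire list of scores at once, it gives the model the holistic, listwise signal the abstract describes, and the same recipe extends to smooth surrogates of rank correlations.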
Natural language processing (NLP) is a field of artificial intelligence that applies information technologies to process human language, understand it to a certain degree, and use it in various applications. The field has grown rapidly in recent years and now employs modern variants of deep neural networks to extract relevant patterns from large text corpora. The main objective of this work is to survey the recent use of NLP in the field of pharmacology. As our work shows, NLP is a highly relevant information extraction and processing approach for pharmacology. It has been used extensively, from intelligent searches through thousands of medical documents to finding traces of adverse drug interactions in social media. We split our coverage into five categories to survey modern NLP methodologies, common tasks, relevant textual data, knowledge bases, and useful programming libraries. We divide each of the five categories into appropriate subcategories, describe their main properties and ideas, and summarize them in tabular form. The resulting survey presents a comprehensive overview of the field, useful for practitioners and interested observers.
While storing invoice content as metadata to avoid paper document processing may be the trend of the future, almost all daily issued invoices are still either printed on paper or generated in digital formats such as PDF. In this paper, we introduce the OCRMiner system for extracting information from scanned document images, which is based on text analysis techniques combined with layout features to extract the indexing metadata of (semi-)structured documents. The system is designed to process documents in a manner similar to that of a human reader, i.e. by employing different layout and textual attributes in a coordinated decision. The system consists of a set of interconnected modules that start from the (possibly erroneous) character-based output of a standard OCR system and allow different techniques to be applied and the extracted knowledge to be expanded at each step. Using an open-source OCR, the system is able to recover the invoice data with 90% accuracy for English and 88% for the Czech invoice set.
For visual manipulation tasks, we aim to represent image content with semantically meaningful features. However, learning implicit representations from images often lacks interpretability, especially when attributes are entangled. We focus on the challenging task of extracting disentangled 3D attributes only from 2D image data. Specifically, we focus on human appearance and learn implicit pose, shape, and garment representations of dressed humans from RGB images. Our method learns an embedding with disentangled latent representations of these three image attributes and enables meaningful re-assembly of features and property control through a 2D-to-3D encoder-decoder structure. The 3D model is inferred solely from the feature map in the learned embedding space. To the best of our knowledge, our method is the first to address this highly underconstrained problem in a cross-domain, disentangled manner. We qualitatively and quantitatively demonstrate our framework's ability to transfer pose, shape, and garments in 3D reconstruction on virtual data, and show how an implicit shape loss can benefit the model's ability to recover fine-grained reconstruction details.